An Analogue-Digital Model of Computation: Turing Machines with Physical Oracles
We introduce an abstract analogue-digital model of computation that couples Turing machines to oracles that are physical processes. Since any oracle has the potential to boost the computational power of a Turing machine, adding a physical process raises interesting questions about the power of the resulting machine. Do physical processes add significantly to the power of Turing machines; can they break the Turing Barrier? Does the power of the Turing machine vary with different physical processes? Specifically, here, we take a physical oracle to be a physical experiment, controlled by the Turing machine, that measures some physical quantity. There are three protocols of communication between the Turing machine and the oracle that simulate the types of error propagation common to analogue-digital devices, namely: infinite precision, unbounded precision, and fixed precision. These three types of precision introduce three variants of the physical oracle model. On fixing one archetypal experiment, we show how to classify the computational power of the three models by establishing lower and upper bounds. Using new techniques and ideas about timing, we give a complete classification.
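The difference between the three precision protocols can be sketched concretely. In the toy Python below (the function names, the error model, and the value of the measured quantity are all invented for illustration, not the paper's formal definitions), a machine extracts binary digits of an unknown quantity by threshold queries, and any query falling inside the oracle's error band returns an uninformative answer:

```python
import random

Y = 0.7431  # the unknown physical quantity (hypothetical value)

def oracle(q, protocol, eps=0.01):
    """Answer the query 'is q < Y?' under one of three protocols.
    This is an illustrative error model, not the paper's formal one."""
    if protocol == "infinite":        # exact comparison, no error
        return q < Y
    # unbounded/fixed precision: queries inside the error band of
    # width eps around Y carry no information
    if abs(q - Y) < eps:
        return random.random() < 0.5
    return q < Y

def estimate(protocol, eps=0.01, bits=16):
    """Bisection: recover binary digits of Y by repeated threshold queries."""
    lo, hi = 0.0, 1.0
    for _ in range(bits):
        mid = (lo + hi) / 2
        if oracle(mid, protocol, eps):
            lo = mid                  # oracle says Y lies above mid
        else:
            hi = mid                  # oracle says Y lies at or below mid
    return (lo + hi) / 2
```

Under the infinite-precision protocol the bisection recovers as many digits of Y as it asks for; under fixed precision only roughly log2(1/eps) digits are trustworthy, which gives a first intuition for why the three protocols yield models of different computational power.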
Solomonoff Induction Violates Nicod's Criterion
Nicod's criterion states that observing a black raven is evidence for the
hypothesis H that all ravens are black. We show that Solomonoff induction does
not satisfy Nicod's criterion: there are time steps in which observing black
ravens decreases the belief in H. Moreover, while observing any computable
infinite string compatible with H, the belief in H decreases infinitely often
when using the unnormalized Solomonoff prior, but only finitely often when
using the normalized Solomonoff prior. We argue that the fault is not with
Solomonoff induction; instead we should reject Nicod's criterion.
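The failure of Nicod's criterion is at heart a Bayesian effect that a toy prior can reproduce. In the sketch below (the environments and weights are my own illustrative numbers, not the paper's construction), a simplicity-biased prior puts heavy weight on a "no ravens at all" world, which is vacuously consistent with H; observing a black raven rules that world out, so belief in H drops:

```python
# Four toy environments (deterministic observation streams).  The weights
# play the role of a Solomonoff-style prior: "simpler" worlds get more mass.
envs = {
    "all black ravens":              {"first_obs": "black raven", "H": True,  "w": 1.0},
    "black raven, then shoes":       {"first_obs": "black raven", "H": True,  "w": 1.0},
    "black raven, then white raven": {"first_obs": "black raven", "H": False, "w": 1.0},
    "no ravens at all":              {"first_obs": "white shoe",  "H": True,  "w": 10.0},
}

def belief_in_H(live):
    """Posterior mass of H-consistent environments among those still live."""
    total = sum(e["w"] for e in live.values())
    return sum(e["w"] for e in live.values() if e["H"]) / total

prior = belief_in_H(envs)
# Observe a black raven: environments predicting anything else are eliminated.
posterior = belief_in_H({k: e for k, e in envs.items()
                         if e["first_obs"] == "black raven"})
print(prior, posterior)   # 12/13 ≈ 0.923 drops to 2/3 ≈ 0.667
```

The black raven is evidence against H here only because it eliminated a heavily weighted H-consistent world, which is the flavour of counterexample the abstract describes.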
What can polysemy tell us about theories of explanation?
Philosophical accounts of scientific explanation are broadly divided into ontic and epistemic views. This paper explores the idea that the lexical ambiguity of the verb to explain and its nominalisation supports an ontic conception of explanation. I analyse one argument which challenges this strategy by criticising the claim that explanatory talk is lexically ambiguous (375–394, 2012). I propose that the linguistic mechanism of transfer of meaning (109–132, 1995) provides a better account of the lexical alternations that figure in the systematic polysemy of explanatory talk, and evaluate the implications of this proposal for the debate between ontic and epistemic conceptions of scientific explanation.
Why the Realist-Instrumentalist Debate about Rational Choice Rests on a Mistake
Within the social sciences, much controversy exists about which status should be ascribed to the rationality assumption that forms the core of rational choice theories. Whilst realists argue that the rationality assumption is an empirical claim which describes real processes that cause individual action, instrumentalists maintain that it amounts to nothing more than an analytically set axiom or "as if" hypothesis which helps in the generation of accurate predictions. In this paper, I argue that this realist-instrumentalist debate about rational choice theory can be overcome once it is realised that the rationality assumption is neither an empirical description nor an "as if" hypothesis, but a normative claim.
Isabelle Modelchecking for insider threats
The Isabelle Insider framework formalises the technique of social explanation for modelling and analysing insider threats in infrastructures, including both physical and logical aspects. However, the abstract Isabelle models need some refinement to provide sufficient detail to explore attacks constructively and to understand how the attacker proceeds. The introduction of mutable states into the model leads us to use the concepts of model checking within Isabelle, which can readily accommodate classical CTL-style model checking. We integrate CTL model checking into the Isabelle Insider framework. A running example of an IoT attack on privacy motivates the method throughout and illustrates how the enhanced framework fully supports realistic modelling and analysis of IoT insiders.
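As a rough illustration of the kind of check involved (a plain explicit-state sketch in Python rather than the Isabelle formalisation, with a three-state IoT model invented for the example), the CTL operator EF can be computed as the least fixpoint of EF p = p ∨ EX EF p:

```python
# Tiny explicit-state CTL check on a hand-made Kripke structure.
# States and transitions are invented; the framework's models are richer.
succ = {                     # transition relation: state -> successor states
    "idle":    {"sensing"},
    "sensing": {"idle", "leak"},
    "leak":    {"leak"},     # once data has leaked, it stays leaked
}

def EF(goal):
    """States from which some path eventually reaches a goal state,
    computed as a least fixpoint by backward reachability."""
    reach = set(goal)
    changed = True
    while changed:           # iterate until no state is newly added
        changed = False
        for s, targets in succ.items():
            if s not in reach and targets & reach:
                reach.add(s)
                changed = True
    return reach

print(EF({"leak"}))   # every state can reach the privacy leak
```

A negative verdict (the initial state not appearing in EF of the bad states) would certify that the modelled insider attack on privacy is impossible in that abstraction; here all three states reach the leak, so the toy model exhibits the attack.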
Raising argument strength using negative evidence: A constraint on models of induction
Both intuitively, and according to similarity-based theories of induction, relevant evidence raises argument strength when it is positive and lowers it when it is negative. In three experiments, we tested the hypothesis that argument strength can actually increase when negative evidence is introduced. Two kinds of argument were compared through forced choice or sequential evaluation: single positive arguments (e.g., "Shostakovich's music causes alpha waves in the brain; therefore, Bach's music causes alpha waves in the brain") and double mixed arguments (e.g., "Shostakovich's music causes alpha waves in the brain, X's music DOES NOT; therefore, Bach's music causes alpha waves in the brain"). Negative evidence in the second premise lowered credence when it applied to an item X from the same subcategory (e.g., Haydn) and raised it when it applied to a different subcategory (e.g., AC/DC). The results constitute a new constraint on models of induction.
Mental States Are Like Diseases
While Quine's linguistic behaviorism is well-known, his Kant Lectures contain one of his most detailed discussions of behaviorism in psychology and the philosophy of mind. Quine clarifies the nature of his psychological commitments by arguing for a modest view that is against "excessively restrictive" variants of behaviorism while maintaining "a good measure of behaviorist discipline…to keep [our mental] terms under control". In this paper, I use Quine's Kant Lectures to reconstruct his position. I distinguish three types of behaviorism in psychology and the philosophy of mind: ontological behaviorism, logical behaviorism, and epistemological behaviorism. I then consider Quine's perspective on each of these views and argue that he does not fully accept any of them. By combining these perspectives we arrive at Quine's surprisingly subtle view about behaviorism in psychology.
Neural models that convince: Model hierarchies and other strategies to bridge the gap between behavior and the brain.
Computational modeling of the brain holds great promise as a bridge from brain to behavior. To fulfill this promise, however, it is not enough for models to be 'biologically plausible': models must be structurally accurate. Here, we analyze what this entails for so-called psychobiological models, models that address behavior as well as brain function in some detail. Structural accuracy may be supported by (1) a model's a priori plausibility, which comes from a reliance on evidence-based assumptions, (2) fitting existing data, and (3) the derivation of new predictions. All three sources of support require modelers to be explicit about the ontology of the model, and require the existence of data constraining the modeling. For situations in which such data are only sparsely available, we suggest a new approach. If several models are constructed that together form a hierarchy of models, higher-level models can be constrained by lower-level models, and low-level models can be constrained by behavioral features of the higher-level models. Modeling the same substrate at different levels of representation, as proposed here, thus has benefits that exceed the merits of each model in the hierarchy on its own